prompt injection defense Flash News List | Blockchain.News

List of Flash News about prompt injection defense

2025-11-13 10:00
OpenAI Publishes GPT-5.1-Codex-Max System Card: Comprehensive Safety Mitigations for Prompt Injection, Agent Sandboxing, and Configurable Network Access

According to OpenAI, the GPT-5.1-Codex-Max system card documents model-level mitigations, including specialized safety training to refuse harmful tasks and defenses against prompt injection, outlining concrete guardrails for safer deployment workflows. OpenAI also reports product-level mitigations such as agent sandboxing and configurable network access, operational controls that restrict how agents interact with external resources (source: OpenAI).
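To make the idea of configurable network access concrete, the sketch below shows one common way such a control can be implemented: an allowlist policy that an agent sandbox consults before permitting an outbound request. This is a hypothetical illustration, not OpenAI's implementation; the names `NetworkPolicy` and `is_allowed` are invented for this example.

```python
from urllib.parse import urlparse

class NetworkPolicy:
    """Hypothetical allowlist-based network policy for an agent sandbox."""

    def __init__(self, allowed_hosts):
        # Hosts the operator has explicitly permitted the agent to reach.
        self.allowed_hosts = set(allowed_hosts)

    def is_allowed(self, url: str) -> bool:
        host = urlparse(url).hostname or ""
        # Permit exact matches and subdomains of any allowed host.
        return any(
            host == h or host.endswith("." + h) for h in self.allowed_hosts
        )

policy = NetworkPolicy(["api.example.com"])
print(policy.is_allowed("https://api.example.com/v1/data"))  # True
print(policy.is_allowed("https://evil.example.net/exfil"))   # False
```

A default-deny allowlist like this limits the blast radius of a successful prompt injection: even if untrusted input steers the agent toward exfiltrating data, requests to unapproved hosts are refused by the sandbox rather than by the model.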
